| Name | Version | Summary | Date |
|------|---------|---------|------|
| vsscunet | 2.0.0 | SCUNet function for VapourSynth | 2025-01-26 12:09:35 |
| tritony | 0.0.20 | Tiny configuration for Triton Inference Server | 2025-01-10 10:25:04 |
| vsrealesrgan | 5.2.1 | Real-ESRGAN function for VapourSynth | 2025-01-04 14:43:24 |
| vsrife | 5.5.1 | RIFE function for VapourSynth | 2025-01-04 14:34:20 |
| triton-model-navigator | 0.13.1 | Triton Model Navigator: an inference toolkit for optimizing and deploying machine learning models and pipelines on the Triton Inference Server and PyTriton. | 2025-01-03 12:24:17 |
| vsmfdin | 1.0.0 | MFDIN function for VapourSynth | 2024-12-07 15:30:17 |
| tritonclient | 2.52.0 | Python client library and utilities for communicating with Triton Inference Server | 2024-11-26 04:27:23 |
| triton-model-analyzer | 1.46.0 | Triton Model Analyzer is a tool to profile and analyze the runtime performance of one or more models on the Triton Inference Server | 2024-11-26 04:23:27 |
| metalm-xclient | 0.1.0 | Client for the Xuelang model inference service | 2024-11-04 13:31:32 |
| torch-tensorrt | 2.5.0 | Torch-TensorRT is a package which allows users to automatically compile PyTorch and TorchScript modules to TensorRT while remaining in PyTorch | 2024-10-18 01:21:50 |
| tensorrt-yolo | 4.3.1 | Your YOLO deployment powerhouse. With the synergy of TensorRT plugins, CUDA kernels, and CUDA Graphs, experience lightning-fast inference speeds. | 2024-10-14 08:21:19 |
| vsanimesr | 2.0.0 | AnimeSR function for VapourSynth | 2024-07-06 13:03:07 |
| xuelang-Xclient | 0.0.9 | Triton Inference Server client | 2024-06-10 12:12:58 |
| vsfemasr | 2.0.0 | FeMaSR function for VapourSynth | 2024-06-02 12:11:21 |
| vsdpir | 4.1.0 | DPIR function for VapourSynth | 2024-05-19 06:32:35 |
| optimum-nvidia | 0.1.0b6 | Optimum Nvidia is the interface between the Hugging Face Transformers and NVIDIA GPUs. | 2024-04-11 21:13:38 |